A typical way to take advantage of the nodes of a Big Data environment is through containers. A container is an aggregation of operating-system technologies that allow an application, usually a single process, to run in isolation within that operating system. In general, a container is completely tied to the life cycle of its process: when the container starts, its process begins, and when the process ends, so does the container. A container comprises only the application and its dependencies. It runs as an isolated process in user space on the host operating system, sharing the kernel with other containers. It therefore enjoys part of the resource-isolation and resource-allocation benefits of virtual machines, while being far more portable and efficient. Ultimately, these technologies provide high-level management of the underlying hardware.
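The coupling between a container and its single main process can be illustrated with a toy model. This is only a conceptual sketch, not the API of any real container runtime: the Container class, its methods, and the example command are all hypothetical.

```python
import subprocess
import sys

class Container:
    """Toy model of the life cycle described above: the 'container'
    is alive exactly as long as its single main process."""

    def __init__(self, argv):
        self.argv = argv
        self.proc = None

    def start(self):
        # Starting the container starts its process.
        self.proc = subprocess.Popen(self.argv)

    @property
    def running(self):
        # The container is "running" iff its process has not exited.
        return self.proc is not None and self.proc.poll() is None

    def wait(self):
        # When the process ends, so does the container.
        return self.proc.wait()

# Hypothetical workload: a short-lived child process.
c = Container([sys.executable, "-c", "print('hello from the container process')"])
c.start()
exit_code = c.wait()
print("running after exit:", c.running)
print("exit code:", exit_code)
```

Note that a real runtime would additionally place the process in its own namespaces and cgroups to achieve the isolation discussed above; the sketch only captures the lifecycle coupling.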